# Mixed-precision quantization

## DeepSeek R1 0528 GPTQ Int4 Int8Mix Compact

License: MIT · Author: QuantTrio
A GPTQ-quantized version of the DeepSeek-R1-0528 model using an Int4 + selective Int8 scheme, which reduces file size while preserving generation quality.
Tags: Large Language Model, Transformers
## Gemma 3 4b It Abliterated GGUF

License: MIT · Author: ZeroWw
A quantization build that achieves a smaller model size while maintaining high performance through mixed-precision quantization.
Tags: Large Language Model, English
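The entries above describe mixed-precision schemes in which most layers are stored at a very low bit width (e.g. Int4) while more sensitive layers are kept at Int8. A minimal sketch of that idea, assuming symmetric uniform quantization and a hypothetical per-layer error threshold (the layer names, weights, and the 0.05 threshold are illustrative, not taken from either model):

```python
def quantize(weights, bits):
    """Symmetric uniform quantization: map floats to a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for int4, 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integer codes back to approximate float weights."""
    return [v * scale for v in q]

def quant_error(weights, bits):
    """Worst-case round-trip error for one layer at the given bit width."""
    q, scale = quantize(weights, bits)
    return max(abs(w - d) for w, d in zip(weights, dequantize(q, scale)))

# Hypothetical layers; real quantizers measure sensitivity on actual weights.
layers = {
    "attn.qkv": [0.12, -0.53, 0.81, -0.07],
    "mlp.up":   [0.40, -0.22, 0.05, 0.91],
}

for name, w in layers.items():
    # Selective precision: fall back to int8 when int4 error is too large.
    bits = 8 if quant_error(w, 4) > 0.05 else 4
    q, scale = quantize(w, bits)
    print(name, f"int{bits}", q)
```

This captures only the selection logic; production schemes such as GPTQ additionally minimize layer output error rather than raw weight error, and operate on per-group scales.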